
    Nonparametric Bayesian multiple testing for longitudinal performance stratification

    This paper describes a framework for flexible multiple hypothesis testing of autoregressive time series. The modeling approach is Bayesian, though a blend of frequentist and Bayesian reasoning is used to evaluate procedures. Nonparametric characterizations of both the null and alternative hypotheses are shown to be the key robustification step necessary to ensure reasonable Type-I error performance. The methodology is applied to part of a large database containing up to 50 years of corporate performance statistics on 24,157 publicly traded American companies, where the primary goal of the analysis is to flag companies whose historical performance differs significantly from what would be expected by chance. Comment: Published at http://dx.doi.org/10.1214/09-AOAS252 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org).
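    The abstract above describes a two-groups style of Bayesian multiple testing applied to company performance series. The sketch below illustrates that general idea only, not the paper's procedure: it uses simulated data, a crude AR(1) adjustment, and parametric normal components in place of the paper's nonparametric characterizations of the null and alternative. Every function, constant, and threshold in it is an assumption made for the example.

```python
# Illustrative sketch (NOT the paper's method): reduce each company's series to an
# AR(1)-adjusted z-statistic, fit a simple two-groups mixture, and flag companies
# with high posterior probability of a real (non-null) performance signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ar1_zstat(y):
    """Crude z-statistic for a nonzero mean, adjusted for lag-1 autocorrelation (assumption)."""
    y = np.asarray(y, dtype=float)
    yc = y - y.mean()
    rho = np.corrcoef(yc[:-1], yc[1:])[0, 1]          # lag-1 autocorrelation
    n_eff = len(y) * (1 - rho) / (1 + rho)            # effective sample size
    return y.mean() / (y.std(ddof=1) / np.sqrt(max(n_eff, 2.0)))

# Simulated panel: 1000 companies, most pure noise, the first 50 with a shifted mean.
series = [rng.normal(0.5 * (i < 50), 1.0, size=40) for i in range(1000)]
z = np.array([ar1_zstat(y) for y in series])

# Two-groups mixture: z ~ p0 * N(0,1) + (1 - p0) * N(0, 3^2). Estimate p0 crudely from
# the central mass, then compute posterior non-null probabilities.
f0 = stats.norm.pdf(z, 0, 1)
f1 = stats.norm.pdf(z, 0, 3)
p0 = min(1.0, np.mean(np.abs(z) < 1) / (stats.norm.cdf(1) - stats.norm.cdf(-1)))
post_nonnull = (1 - p0) * f1 / (p0 * f0 + (1 - p0) * f1)

# Bayesian FDR-style rule: flag the highest-probability companies until the running
# average of (1 - probability) exceeds the target rate of 10%.
order = np.argsort(-post_nonnull)
fdr = np.cumsum(1 - post_nonnull[order]) / np.arange(1, len(z) + 1)
flagged = order[: np.searchsorted(fdr, 0.10)]
print(f"p0 ~= {p0:.2f}, flagged {len(flagged)} companies")
```

    The final step mirrors the usual Bayesian false discovery rate rule: units are flagged in decreasing order of posterior non-null probability until the running average of the complementary probabilities exceeds the target rate.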

    On the domain-specificity of mindsets: The relationship between aptitude beliefs and programming practice

    This is the author's accepted manuscript. The final published article is available from the link below. Copyright @ 2013 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.

    Deliberate practice is important in many areas of learning, including that of learning to program computers. However, beliefs about the nature of personal traits, known as mindsets, can have a profound impact on such practice. Previous research has shown that those with a fixed mindset believe their traits cannot change; they tend to reduce their level of practice when they encounter difficulty. In contrast, those with a growth mindset believe their traits are flexible; they tend to maintain regular practice regardless of the level of difficulty. However, treating mindset as a single construct centered on intelligence may not be appropriate in the field of computer programming. Exploring this notion, a self-belief survey was distributed to undergraduate software engineering students. It revealed that beliefs about intelligence and programming aptitude formed two distinct constructs. Furthermore, the mindset for programming aptitude had greater utility in predicting software development practice, and a follow-up survey showed that it became more fixed throughout instruction. Thus, educators should consider the role of programming-specific beliefs in the design and evaluation of introductory courses in software engineering. In particular, they need to situate and contextualize the growth messages that motivate students who experience early setbacks.

    On the half-Cauchy prior for a global scale parameter

    This paper argues that the half-Cauchy distribution should replace the inverse-gamma distribution as a default prior for a top-level scale parameter in Bayesian hierarchical models, at least for cases where a proper prior is necessary. Our arguments involve a blend of Bayesian and frequentist reasoning, and are intended to complement the original case made by Gelman (2006) in support of the folded-t family of priors. First, we generalize the half-Cauchy prior to the wider class of hypergeometric inverted-beta priors. We derive expressions for posterior moments and marginal densities when these priors are used for a top-level normal variance in a Bayesian hierarchical model. We go on to prove a proposition that, together with the results for moments and marginals, allows us to characterize the frequentist risk of the Bayes estimators under all global-shrinkage priors in the class. These theoretical results, in turn, allow us to study the frequentist properties of the half-Cauchy prior versus a wide class of alternatives. The half-Cauchy occupies a sensible 'middle ground' within this class: it performs very well near the origin, but does not lead to drastic compromises in other parts of the parameter space. This provides an alternative, classical justification for the repeated, routine use of this prior. We also consider situations where the underlying mean vector is sparse, where we argue that the usual conjugate choice of an inverse-gamma prior is particularly inappropriate and can lead to highly distorted posterior inferences. Finally, we briefly summarize some open issues in the specification of default priors for scale terms in hierarchical models.
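    As a concrete illustration of the modeling setup described above, the sketch below places a half-Cauchy prior on the top-level scale of a simple hierarchical normal model. It assumes PyMC as the inference engine and uses simulated data; the dimensions, hyperparameters, and sampler settings are illustrative choices for the example, not those studied in the paper.

```python
# Minimal sketch (assumptions: simulated data, PyMC) of a hierarchical normal model
# with a half-Cauchy prior on the global scale tau, in place of the conjugate
# inverse-gamma choice criticized in the abstract.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_groups, n_obs = 8, 20
theta_true = rng.normal(0.0, 0.5, size=n_groups)                 # true group effects
y = rng.normal(theta_true[:, None], 1.0, size=(n_groups, n_obs))  # within-group data

with pm.Model() as model:
    mu = pm.Normal("mu", 0.0, 5.0)                        # overall mean
    tau = pm.HalfCauchy("tau", beta=1.0)                  # half-Cauchy prior on the global scale
    theta = pm.Normal("theta", mu, tau, shape=n_groups)   # group-level effects
    pm.Normal("y", theta[:, None], 1.0, observed=y)       # known unit observation noise
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print("posterior mean of tau:", idata.posterior["tau"].mean().item())
```

    The only change relative to the common conjugate setup is the prior on tau; the heavy right tail of the half-Cauchy avoids forcing the scale away from zero in the way a vague inverse-gamma prior can.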